Given a set of images of a scene, re-rendering the scene from novel viewpoints and lighting conditions is an important and challenging problem in computer vision and graphics. On the one hand, most existing works in computer vision impose strong assumptions on the image formation process (e.g., direct illumination and predefined materials) to make scene parameter estimation tractable. On the other hand, mature computer graphics tools allow modeling complex, photo-realistic light transport given all the scene parameters. Combining these approaches, we propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function, which implicitly handles global illumination effects using novel environment maps. Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition. To disambiguate the task during training, we tightly integrate a differentiable path tracer into the training process and propose a combination of a synthesized OLAT loss and a real image loss. Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art, and our re-rendering results are consequently also more realistic and accurate.
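To make the precomputed-radiance-transfer idea the abstract builds on concrete, the following is a minimal sketch, not the authors' actual network: outgoing radiance at a surface point is the dot product of a learned per-point transfer vector with the coefficients of the environment lighting projected onto a basis. The class name, input parameterization, and basis size are assumptions for illustration.

    import torch
    import torch.nn as nn

    class TransferFieldMLP(nn.Module):  # hypothetical name, illustrative architecture
        def __init__(self, n_basis=25, hidden=128):
            super().__init__()
            self.n_basis = n_basis
            self.net = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),      # input: 3D position + view direction
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3 * n_basis),       # RGB transfer coefficients
            )

        def forward(self, xyz, view_dir, light_coeffs):
            # light_coeffs: (n_basis, 3) environment map projected onto the lighting basis
            t = self.net(torch.cat([xyz, view_dir], dim=-1))
            t = t.view(*xyz.shape[:-1], self.n_basis, 3)
            # radiance = sum_k transfer_k * light_k, per colour channel
            return (t * light_coeffs).sum(dim=-2)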
Human perception reliably identifies the movable and immovable parts of 3D scenes and completes the 3D structure of objects and background from incomplete observations. We learn this skill not from labeled examples, but simply by observing objects move. In this work, we propose a method that observes unlabeled multi-view videos at training time and learns to map a single image observation of a complex scene, such as a street with cars, to a 3D neural scene representation that is disentangled into movable and immovable parts while plausibly completing its 3D structure. We parameterize the movable and immovable scene parts separately with 2D neural ground plans. These ground plans are 2D grids aligned with the ground plane that can be locally decoded into 3D neural radiance fields. Our model is trained self-supervised via neural rendering. We demonstrate that, using simple heuristics, the structure of our disentangled representation enables a variety of downstream tasks on street-scale 3D scenes, such as extracting object-centric 3D representations, novel view synthesis, instance segmentation, and 3D bounding box prediction, highlighting its value as a backbone for data-efficient 3D scene understanding models. This disentanglement further enables scene editing via object manipulations such as deletion, insertion, and rigid-body motion.
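A minimal sketch of the general ground-plan mechanism, under the assumption that features are bilinearly sampled from a ground-aligned 2D grid at (x, z) and decoded by an MLP conditioned on the height y; this is an illustration of the idea, not the paper's exact architecture, and all names and sizes are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GroundPlanField(nn.Module):  # hypothetical name
        def __init__(self, feat_dim=32, res=128, extent=50.0, hidden=64):
            super().__init__()
            self.plan = nn.Parameter(torch.zeros(1, feat_dim, res, res))  # learned 2D grid
            self.extent = extent                                          # metres covered by the grid
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),             # (density, r, g, b)
            )

        def forward(self, xyz):                   # xyz: (N, 3) world points
            uv = xyz[:, [0, 2]] / self.extent     # normalise (x, z) into [-1, 1]
            grid = uv.view(1, -1, 1, 2)
            feats = F.grid_sample(self.plan, grid, align_corners=True)    # (1, C, N, 1)
            feats = feats.squeeze(-1).squeeze(0).t()                      # (N, C)
            out = self.mlp(torch.cat([feats, xyz[:, 1:2]], dim=-1))       # condition on height y
            density, rgb = out[:, :1], torch.sigmoid(out[:, 1:])
            return density, rgb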
2D images are observations of the 3D physical world, depicted through geometry, material, and illumination components. Recovering these underlying intrinsic components from 2D images, also known as inverse rendering, usually requires a supervised setting with paired images collected from multiple viewpoints and lighting conditions, which is resource-demanding. In this work, we present GAN2X, an unsupervised inverse rendering method that uses only unpaired images for training. Unlike previous Shape-from-GAN approaches that mainly focus on 3D shape, we make the first attempt to also recover non-Lambertian material properties by exploiting pseudo-paired data generated by a GAN. To achieve precise inverse rendering, we devise a specularity-aware neural surface representation that continuously models geometry and material properties. A shading-based refinement technique is adopted to further distill information from the target image and recover finer details. Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction. We also show its applications in downstream tasks, including real image editing and lifting a 2D GAN to a decomposed 3D GAN.
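For orientation, here is a generic non-Lambertian shading term (Blinn-Phong) of the kind such an inverse-rendering pipeline decomposes images into; this is a textbook model used as an illustration, and GAN2X's actual specular representation and parameters may differ.

    import torch
    import torch.nn.functional as F

    def shade(albedo, normals, view_dir, light_dir, light_rgb,
              ambient=0.2, spec_strength=0.5, shininess=32.0):
        """albedo: (N,3); normals/view_dir/light_dir: (N,3) unit vectors; light_rgb: (3,)."""
        n_dot_l = (normals * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
        diffuse = albedo * n_dot_l * light_rgb
        half = F.normalize(view_dir + light_dir, dim=-1)          # Blinn-Phong half vector
        n_dot_h = (normals * half).sum(-1, keepdim=True).clamp(min=0.0)
        specular = spec_strength * (n_dot_h ** shininess) * light_rgb
        return ambient * albedo + diffuse + specular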
Neural 3D implicit representations learn priors that are useful for diverse applications, such as single- or multiple-view 3D reconstruction. A major downside of existing approaches is that rendering an image requires evaluating the network multiple times per camera ray, so the high computational cost becomes a bottleneck for downstream applications. We address this problem by introducing a novel neural scene representation that we call the directional distance function (DDF). To this end, we learn a signed distance function (SDF) along with our DDF model to represent a class of shapes. Specifically, our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction. Therefore, our DDF allows rendering images with just a single network evaluation per camera ray. Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map. We evaluate our proposed method on 3D reconstruction from single-view depth images, where we empirically show that our algorithm reconstructs 3D shapes more accurately and is more than 15 times faster (per iteration) than competing methods.
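The following sketch illustrates why a directional distance function renders in one evaluation per ray: the network maps a ray (origin plus direction) directly to the distance of the first surface hit, so a depth map needs a single forward pass. The untrained stand-in network and the flat ray parameterization are assumptions; the paper's model is defined on the unit sphere.

    import torch
    import torch.nn as nn

    ddf = nn.Sequential(                      # stand-in for a trained DDF
        nn.Linear(6, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

    def render_depth(ray_origins, ray_dirs):
        """ray_origins, ray_dirs: (H*W, 3). Returns per-ray distance to the surface."""
        with torch.no_grad():
            return ddf(torch.cat([ray_origins, ray_dirs], dim=-1))   # one evaluation per ray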
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated with rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects ...
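As a minimal illustration of the inverse-rendering loop the report refers to, scene parameters can be optimized by comparing a differentiable render against observed images. The renderer and parameter set below are toy placeholders, not any specific system from the report.

    import torch

    params = {                                   # toy scene parameters (assumed)
        "albedo": torch.rand(3, requires_grad=True),
        "light":  torch.rand(3, requires_grad=True),
    }

    def render(p):                               # placeholder differentiable renderer
        return (p["albedo"] * p["light"]).clamp(0, 1)

    observed = torch.tensor([0.4, 0.3, 0.2])     # observation to fit
    opt = torch.optim.Adam(params.values(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = ((render(params) - observed) ** 2).mean()   # differentiable rendering loss
        loss.backward()
        opt.step()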
Generative adversarial networks (GANs) continue to produce advances both in the visual quality of still images and in the learning of temporal correlations. However, few works have managed to combine these two interesting capabilities for synthesizing video content: most methods require an extensive training dataset to learn the temporal correlations, while being rather limited in the resolution and visual quality of their output. We present a novel approach to the video synthesis problem that helps to greatly improve visual quality and drastically reduce the amount of training data and resources required to generate videos. Our formulation separates the spatial domain, in which individual frames are synthesized, from the temporal domain, in which motion is generated. For the spatial domain we use a pre-trained StyleGAN network, whose latent space allows control over the appearance of the objects it was trained on. The expressive power of this model lets us embed our training videos in its latent space. Our temporal architecture is then trained not on sequences of RGB frames, but on sequences of StyleGAN latent codes. The advantageous properties of the StyleGAN latent space simplify the discovery of temporal correlations. We demonstrate that training our temporal architecture for about 6 hours on only 10 minutes of footage of a single subject is sufficient. After training, our model can generate new portrait videos not only for the training subject, but also for any random subject that can be embedded in the StyleGAN latent space.
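A minimal sketch of modelling motion in a pretrained GAN's latent space, assuming a simple recurrent predictor over latent codes; this is an illustration of the idea, not the authors' exact temporal architecture, and the latent size and layer choices are assumptions.

    import torch
    import torch.nn as nn

    latent_dim = 512                              # StyleGAN-like latent size (assumption)

    temporal = nn.LSTM(latent_dim, latent_dim, num_layers=2, batch_first=True)
    to_latent = nn.Linear(latent_dim, latent_dim)

    def predict_next_latents(latent_seq):
        """latent_seq: (B, T, latent_dim) codes of video frames embedded in the GAN's latent space."""
        hidden, _ = temporal(latent_seq)
        return to_latent(hidden)                  # (B, T, latent_dim) one-step-ahead prediction

    # Training would regress predicted codes against the ground-truth next codes; at test
    # time, generated latent sequences are decoded into frames by the frozen, pre-trained
    # image generator.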
Figure 1. Given a monocular image sequence, NR-NeRF reconstructs a single canonical neural radiance field to represent geometry and appearance, and a per-time-step deformation field. We can render the scene into a novel spatio-temporal camera trajectory that significantly differs from the input trajectory. NR-NeRF also learns rigidity scores and correspondences without direct supervision on either. We can use the rigidity scores to remove the foreground, we can supersample along the time dimension, and we can exaggerate or dampen motion.
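One way to read the caption's combination of a canonical field, a per-time-step deformation field, and rigidity scores is sketched below; this is an assumed formulation for illustration, not the exact NR-NeRF equations: sample points are bent into the canonical frame before querying the canonical field, with the learned rigidity score gating how much a point is allowed to deform.

    import torch
    import torch.nn as nn

    class DeformedField(nn.Module):
        def __init__(self, canonical_field, hidden=128):
            super().__init__()
            self.canonical = canonical_field            # maps (N, 3) points -> (density, rgb)
            self.deform = nn.Sequential(                # offset conditioned on time
                nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))
            self.rigidity = nn.Sequential(              # 0 = rigid, 1 = fully non-rigid
                nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())

        def forward(self, x, t):
            # x: (N, 3) sample points, t: (N, 1) time step
            offset = self.deform(torch.cat([x, t], dim=-1))
            x_canonical = x + self.rigidity(x) * offset   # rigid points stay put
            return self.canonical(x_canonical)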
Fig. 1. Unlike current face reenactment approaches that only modify the expression of a target actor in a video, our novel deep video portrait approach enables full control over the target by transferring the rigid head pose, facial expression and eye motion with a high level of photorealism. We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network, thus taking full control of the target ...
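A minimal sketch of the rendering-to-video translation setup described above, written as a generic conditional-GAN training step; the toy networks and loss weighting are assumptions for illustration and do not reproduce the paper's space-time architecture.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())       # toy generator
    D = nn.Sequential(nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1))                  # conditional discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(synthetic_render, real_frame):
        """synthetic_render, real_frame: (B, 3, H, W) conditioning render and target frame."""
        fake = G(synthetic_render)
        # discriminator: distinguish real (render, frame) pairs from fake ones
        d_real = D(torch.cat([synthetic_render, real_frame], dim=1))
        d_fake = D(torch.cat([synthetic_render, fake.detach()], dim=1))
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator: fool the discriminator while staying close to the real frame
        d_fake = D(torch.cat([synthetic_render, fake], dim=1))
        loss_g = bce(d_fake, torch.ones_like(d_fake)) + (fake - real_frame).abs().mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()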
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the context of machine unlearning and other settings when model retraining may be induced. An adversary first adds a few carefully crafted points to the training dataset such that the impact on the model's predictions is minimal. The adversary subsequently triggers a request to remove a subset of the introduced points at which point the attack is unleashed and the model's predictions are negatively affected. In particular, we consider clean-label targeted attacks (in which the goal is to cause the model to misclassify a specific test point) on datasets including CIFAR-10, Imagenette, and Imagewoof. This attack is realized by constructing camouflage datapoints that mask the effect of a poisoned dataset.
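The two-stage protocol can be summarized in a short sketch covering dataset construction only; crafting the actual poison and camouflage points is the hard part and is omitted, and `train_fn` is a placeholder for the victim's training procedure.

    def camouflaged_poisoning(clean_data, poison_points, camouflage_points, train_fn):
        # Stage 1: attacker contributes poison and camouflage; the camouflage masks the
        # poison's effect, so the trained model looks benign on the targeted test point.
        model_benign = train_fn(clean_data + poison_points + camouflage_points)
        # Stage 2: attacker requests removal (unlearning) of the camouflage subset;
        # retraining on clean + poison data unleashes the targeted misclassification.
        model_attacked = train_fn(clean_data + poison_points)
        return model_benign, model_attacked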
State-of-the-art pre-trained language models (PLMs) outperform other models when applied to the majority of language processing tasks. However, PLMs have been found to degrade in performance under distribution shift, a phenomenon that occurs when data at test-time does not come from the same distribution as the source training set. Equally challenging is the task of obtaining labels in real time, due to issues such as long labeling-feedback loops. The lack of adequate methods to address these challenges motivates approaches that continuously adapt the PLM to a shifted distribution. Unsupervised domain adaptation adapts a source model to an unseen as well as unlabeled target domain. While some techniques, such as data augmentation, can adapt models in several scenarios, they have only been sparsely studied for addressing the distribution shift problem. In this work, we present an approach (MEMO-CL) that improves the performance of PLMs at test-time under distribution shift. Our approach takes advantage of the latest unsupervised techniques in data augmentation and adaptation to minimize the entropy of the PLM's output distribution. MEMO-CL operates on a batch of augmented samples from a single observation in the test set. The technique introduced is unsupervised, domain-agnostic, easy to implement, and requires no additional data. Our experiments yield a 3% improvement over current test-time adaptation baselines.
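A minimal sketch of test-time adaptation by entropy minimization over augmentations, the mechanism the abstract describes; this is a generic implementation, so MEMO-CL's exact augmentations, batching, and schedule may differ, and `model` (returning class logits) and `augment` are placeholders.

    import torch

    def adapt_on_single_example(model, optimizer, augment, example, n_aug=8):
        """Adapt the model on one unlabeled test observation via marginal entropy minimization."""
        model.train()
        augmented = [augment(example) for _ in range(n_aug)]
        logits = torch.stack([model(a) for a in augmented])       # (n_aug, n_classes)
        probs = logits.softmax(dim=-1)
        marginal = probs.mean(dim=0)                               # average prediction over augmentations
        entropy = -(marginal * marginal.clamp_min(1e-12).log()).sum()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
        return entropy.item()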